Typically, the complexity of the monolith is a major problem. Its sheer size, poor documentation, and the absence of its original developers often make it difficult to understand its internal components and their dependencies. There may be clues such as namespaces – but no one can guarantee that the coding guidelines were followed with the same rigor by all developers. And where black magic is used in the code, static code analysis no longer helps, and we live in constant fear that a change could have unintended side effects.
As soon as we understand the old code, several refactorings are likely to come to mind: quickly fixing something here, simplifying another thing there along the way. But the monolith is complex, and before we know it a very shaggy yak is smiling at us that we would rather not shave right now. That is why it is important to have a strategy with many backdoors. As soon as we realize that we have taken a wrong turn, we need to retreat quickly to the last commit, breathe a sigh of relief, and start fresh. But wait a minute, isn't that a standard problem in software development? Yes, it is! And there is also a standard solution: automated software testing.
Automated Black Box Testing
In the best case, we cover all functions of the feature to be extracted as a microservice with black-box tests, e.g. written with Behat. We click through the feature in the monolith and record our observations as test expectations. In this way, we not only secure the functionality, but also gradually learn the details of the feature to be extracted. For example, we can test the HTTP status code and specific page contents of the feature's start page, a login function, create/read/edit/delete operations, and the submission of a search form together with the corresponding results list. With this exploratory approach, we should also consider whether there are different user groups (e.g. administrators) with associated rights, and test the access protection for each of these rights.
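As a minimal sketch, such an expectation could be recorded in a Behat context like the following, assuming the Behat Mink extension is installed and configured; the custom step is an illustrative assumption, not taken from the article:

<?php
// Sketch of a Behat context for black-box tests against the monolith.
// Assumes the Mink extension is configured; the step name is made up.

use Behat\MinkExtension\Context\MinkContext;

class FeatureContext extends MinkContext
{
    /**
     * @Then accessing :path should be denied with status :code
     */
    public function assertAccessDenied($path, $code)
    {
        // Record the observed behaviour of the monolith as a test expectation.
        $this->visitPath($path);
        $this->assertSession()->statusCodeEquals((int) $code);
    }
}

In a feature file, this could then be used as, for example, Then accessing "/admin" should be denied with status 403 (path and status being illustrative). Many of the checks mentioned above – status codes, page contents, form interactions – are already covered by the steps MinkContext ships with; custom steps are only needed for the domain-specific parts.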
Unit tests are less relevant in this context. At this point in time, we don't know enough about the monolith or the microservice to be extracted: we cannot yet say for each unit what its context is and whether it is relevant to us or not.
With the black-box tests we can approach the question of which tables the microservice needs and which data rows we need for our test fixtures. Later we will be able to narrow this down even further, but for the moment we want to record our knowledge. With Slimdump, a tool for highly configurable MySQL dumps, we can for example create a configuration file, version it, and share it with our colleagues. In addition to selecting tables and data rows, we can configure that user names and e-mail addresses of a user table are only dumped anonymously, or that, for performance reasons, we only want to dump ten percent of the data records and no BLOBs in a certain table.
From scratch or clone?
How do we actually start with the microservice? Do we start from scratch on a green field, or as a clone of the monolith from which we cut away everything that does not belong to the microservice? There are good reasons for both variants, but for me the decision boils down to weighing three essential criteria:
- The number of constraints: on the green field we start with minimal constraints; in the clone we carry over the monolith's complete technical environment.
- The use of old metadata: in my experience, the commit message history is often the only way to understand a place with particularly crazy code – especially when a ticket number makes the context of the last code changes clear.
- The latency until the microservice can go live: if we start our microservice on the green field, there is a very long latency until it can be switched live, because it has to be developed from scratch. If, on the other hand, we create the microservice as a clone of the monolith and run it on its own host, it can in principle be put live immediately. We simply set up some kind of proxy (e.g. Varnish or Apache rewrite rules; see the sketch after this list) that directs the feature's requests to the microservice host, while all other requests go to the monolith host as before. We may still have to deal with details about cookies, sessions, and URL rewriting – but that still means much less latency than a complete greenfield development.
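A minimal sketch of such Apache rewrite rules, assuming mod_rewrite and mod_proxy are enabled and assuming (purely for illustration) that the extracted feature lives under /search and its new host is called search.internal:

RewriteEngine On
# Requests for the extracted feature are proxied to the microservice host
RewriteRule ^/search(/.*)?$ http://search.internal/search$1 [P,L]
# All other requests keep being served by the monolith as before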
In my experience, the arguments for starting with the clone predominate. I suspect that this approach is generally also more economical because, as much fun as greenfield development may be, it seems to me to be only a euphemism for a partial rewrite. The clone, on the other hand, is the basis for a refactoring.
Probably there are also projects with special circumstances where the green field is the much better decision – but I haven't worked on such projects yet. Therefore, the rest of this article deals with the cloning path.
If we have an installed instance of the monolith, we should keep it available for as long as possible, until the microservice goes live. In the course of the project we will make decisions based on heuristics, and they can turn out to be wrong only days later. Maybe we cut away too much code, maybe we simplified too much, and only afterwards do we realize that we lacked a test that would have pointed out exactly this problem. At such moments it is extremely helpful to be able to quickly check, in an executable version of the monolith, how a certain process worked correctly.
Detection of unused resources
Once we have set up our future microservice as a clone of the monolith, the question arises: how do we recognize the unused resources that we need to cut away so that only the microservice remains? In general, the following equation applies: Unused resources = all resources – used resources.
All resources of a kind are typically already available as a list (e.g. all files via ls). The used resources are determined by our black-box tests: all we need to do is activate a suitable form of coverage logging, run the tests, and then convert the coverage into a meaningful format. Finally, we calculate the difference and thus get the unused resources.
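As a minimal sketch of this difference calculation for PHP files (the file names are assumptions; all-files.txt could be generated e.g. with find . -name '*.php'):

<?php
// Calculate unused = all - used from two plain text files with one path per line.
// Both files must list the paths in the same format (e.g. absolute paths).
$allFiles  = file('all-files.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$usedFiles = file('used-files.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

$unusedFiles = array_diff($allFiles, array_unique($usedFiles));
file_put_contents('unused-files.txt', implode(PHP_EOL, $unusedFiles));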
Our tests therefore play a dual role: firstly, they ensure that the code is correct; secondly, we use their coverage to determine the unused resources. Let's now take a closer look at how the determination of the used resources works for different resource types.
Used PHP files
Most PHP frameworks process requests through a front controller. We can easily hook into this controller and log the code coverage with a tool like Xdebug. Then we write the paths of the used files to a file used-files.txt (Listing 1).
<?php
// have coverage collected
xdebug_start_code_coverage();

// original front controller
$app = new App();
$app->handle($_REQUEST);

// write paths of used files, one per line
// (the trailing newline keeps paths of subsequent requests separated)
$outFile = fopen('used-files.txt', 'a');
fwrite(
  $outFile,
  implode(PHP_EOL, array_keys(xdebug_get_code_coverage())) . PHP_EOL
);
fclose($outFile);
We could also use sysdig for this, a tool for monitoring and analyzing system calls and Linux kernel events. The big advantage is that we capture not only the used PHP files but all opened files – for example configuration files and view templates. The biggest disadvantage, however, is that this approach is only available on Linux.
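A hedged sketch of such a sysdig call (the process name filter and the output file name are assumptions that depend on the concrete setup):

sudo sysdig -p "%fd.name" "(evt.type=open or evt.type=openat) and proc.name contains php" >> used-files-sysdig.txt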
Composer packages and MySQL tables
If we filter the files located in a subdirectory of Composer's vendor directory out of used-files.txt, we can read the used Composer packages directly from their paths.
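A one-liner along these lines could list the used packages, assuming the paths in used-files.txt contain the vendor/ directory and GNU grep is available:

grep -oP 'vendor/\K[^/]+/[^/]+' used-files.txt | sort -u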
For the used MySQL tables, coverage logging is easy to activate as well, e.g. with the following SQL statements:
SET global general_log = 1;
SET global log_output = 'table';
If we now execute our tests, the SQL queries are logged in the table mysql.general_log (which we probably want to truncate beforehand). From these queries we can extract the names of the used tables. However, this is too tedious to do manually: on the one hand, there will typically be a large number of queries; on the other hand, we would have to look closely at where table names can appear in each query – for example, comma-separated in the FROM clause, in JOIN clauses, and in subqueries.
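The logged queries themselves can be read from the log table with a statement like the following (the column names are those of MySQL's general_log table); extracting the table names from them is what we will automate later:

SELECT argument FROM mysql.general_log WHERE command_type = 'Query';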
Used frontend assets
The paths of used frontend assets such as images, fonts, JavaScript, and CSS files can be found as hits in the web server's access logs and filtered out with a regular expression. For the Apache standard access log format, this is #"(?:get|post) ([a-z0-9\_\-\.\/]*)#i. But there are some difficulties:
- Asset downloads: the default Behat setup uses Goutte as its browser, which neither downloads images nor executes JavaScript or CSS. That means these hits are missing from the log file. As a solution, other browsers or browser drivers can be connected to Behat; via Selenium, Firefox, Chrome, or even an armada of browsers on BrowserStack can be used.
- Concatenation: for years we have improved the performance of web applications by concatenating JavaScript and CSS into a few files to reduce the number of TCP connections to our server. This approach is outdated with HTTP/2, but is often found in legacy monoliths. A statement like "screen.css and app.js are used" is then not very helpful. The easiest way out could be to turn off concatenation and embed the source files directly in the HTML code, as long as it is still easy to find out which file belongs to which page. If not, the coverage could be evaluated at line level within the concatenated files, in conjunction with source maps. Line-level coverage, however, is a completely different problem.
- Automated line coverage in JavaScript and CSS: there is a variety of coverage logging tools for JavaScript, including Istanbul, JSCover, and Blanket.js. These can be connected to JS test runners like Karma or Jasmine. On top of the actual JavaScript tests, this may require some additional effort.
With CSS, the situation is even more difficult, but things are in motion here. For example, Chrome has had its own panel for CSS coverage since version 59. To determine the coverage, all selectors in the loaded CSS files are checked for whether they apply to the loaded document. If so, the selectors and the associated declarations are marked as used. This is not perfect, but it seems to be a useful heuristic. Unfortunately, neither input nor output of this process can easily be automated. It is to be hoped that corresponding methods will be added to the Puppeteer API soon.
If you really want to automate this process right now, you can, for example, use the Firefox plug-in Dust-Me Selectors. There you can enter a sitemap and export the resulting coverage at file and line level as JSON.
Automation with Zauberlehrling
Zauberlehrling is an open-source tool that assists in the extraction of microservices. In particular, it automates some of the steps for detecting unused resources:
- This command displays the unused PHP files:

bin/console show-unused-php-files --pathToInspect --pathToOutput --pathToBlacklist usedFiles

The input is the file used-files.txt that we created above. In addition, the path to be scanned on the file system, an output file, and a blacklist can be configured, e.g. to exclude temp directories or directories that are known not to be covered by the black-box tests (e.g. paths for unit tests).

- The following statement shows the supposedly unused Composer packages:

bin/console show-unused-composer-packages --vendorDir composerJson usedFiles

The quality of the statement correlates directly with the content of the usedFiles file. Zauberlehrling considers a package to be used if at least one file in it is used. For example, if usedFiles only contains PHP files determined with Xdebug, Composer packages consisting exclusively of view templates or configuration are never recognized as being used; they are always marked as unused. usedFiles files created with sysdig are therefore more advantageous here.

- The command

bin/console show-unused-mysql-tables

uses an SQL parser to determine the used tables from the MySQL log table, calculates the difference to all tables, and displays the tables that are supposedly unused.

- The following instruction shows the supposedly unused frontend assets:

bin/console show-unused-public-assets --regExpToFindFile --pathToOutput --pathToBlacklist pathToPublic pathToLogFile

It takes the paths of the public directory and of the access logs as input, and can be configured with the regular expression for recognizing file paths in the access log, an output file, and a blacklist (as with the unused PHP files).
Zauberlehrling’s automation makes it possible to work in short development cycles. After an initial coverage run, the cycles may look like this:
1. delete an unused resource
2. run the tests (for the sake of speed, without coverage)
3. if necessary, restore the deleted resource and fix code or tests
4. commit
5. go back to step 1, or stop
Once the recognized unused resources have been deleted, another test run with code coverage should be carried out. It is also worth taking a look at the list of used files; maybe some low-hanging fruit can be found there. For example, if only a few files of a Composer package are actually used, we might be able to remove the package as a dependency altogether. Perhaps we will also find abstractions that are superfluous in the context of our microservice and that we can now simplify. Afterwards, of course, we must not forget to run the tests again.